Learning by CHIR without Storing Internal Representations

Authors

  • Dimitry Nabutovsky
  • Tal Grossman
  • Eytan Domany
Abstract

A new learning algorithm for feedforward networks, learning by choice of internal representations (CHIR), was recently introduced [1,2]. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applied a search procedure in the space of internal representations, together with cooperative adaptation of the weights (e.g., by using perceptron learning). Tentative guesses of the internal representations, however, had to be stored in memory. Here we present a new version, CHIR2, which eliminates the need to store internal representations and at the same time is faster than the original algorithm. We first describe a basic version of CHIR2, tailored for networks with a single output and one hidden layer. We tested it on three problems, contiguity, symmetry, and parity, and compared its performance with backpropagation. For all these problems our algorithm is 30-100 times faster than backpropagation, and, most significantly, learning time increases more slowly with system size. Next, we show how to modify the algorithm for networks with many output units and more than one hidden layer. This version is tested on the combined parity+symmetry problem and on the random-associations task. A third modification of the new algorithm, suitable for networks with binary weights (all weights and thresholds are equal to ±1), is also described, and tests of its performance on the parity and the random-teacher problems are reported.
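The abstract's core idea, treating the internal (hidden-layer) representations as the objects being learned, can be illustrated with a toy sketch. The code below is not the published CHIR/CHIR2 procedure; it is a minimal, assumed variant for a single-output network with one hidden layer of ±1 units: when an example is misclassified, it greedily flips the hidden unit whose flip most helps the output (a search in representation space), then pulls both weight layers toward that representation with ordinary perceptron updates. All function names and parameters here are illustrative.

```python
import itertools
import numpy as np

def sign(x):
    # Map to ±1, treating 0 as +1.
    return np.where(x >= 0, 1, -1)

def perceptron_step(w, x, target, eta=0.1):
    # Standard perceptron rule: update w only when it misclassifies x.
    if sign(w @ x) != target:
        w = w + eta * target * x
    return w

def train_chir_sketch(X, y, n_hidden=3, max_sweeps=500, seed=0):
    """Toy CHIR-style trainer (illustrative, not the published algorithm):
    choose internal representations that help the output unit, then adapt
    both weight layers toward them by perceptron learning."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(n_hidden, X.shape[1] + 1))  # hidden weights (+bias)
    w2 = rng.normal(size=n_hidden + 1)                # output weights (+bias)
    for _ in range(max_sweeps):
        errors = 0
        for x, t in zip(X, y):
            xb = np.append(x, 1.0)
            h = sign(W1 @ xb)                         # current internal representation
            if sign(w2 @ np.append(h, 1.0)) != t:
                errors += 1
                # Search in representation space: flip the hidden unit
                # whose flip moves the output most toward the target.
                scores = []
                for i in range(n_hidden):
                    h2 = h.copy()
                    h2[i] = -h2[i]
                    scores.append(t * (w2 @ np.append(h2, 1.0)))
                i = int(np.argmax(scores))
                h[i] = -h[i]                          # tentative representation, used and discarded
                W1[i] = perceptron_step(W1[i], xb, h[i])
                w2 = perceptron_step(w2, np.append(h, 1.0), t)
        if errors == 0:
            return W1, w2, True
    return W1, w2, False

def predict(W1, w2, x):
    h = sign(W1 @ np.append(x, 1.0))
    return int(sign(w2 @ np.append(h, 1.0)))

# 2-bit parity in ±1 coding: the target is the product of the inputs.
X = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
y = np.array([int(np.prod(row)) for row in X])

# A few random restarts, since the greedy search carries no convergence guarantee.
for seed in range(20):
    W1, w2, converged = train_chir_sketch(X, y, seed=seed)
    if converged:
        break
```

Note that the tentative representation `h` is recomputed from the current weights on every visit and never stored per pattern, which mirrors the memory saving the abstract attributes to CHIR2.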


Related articles

The CHIR Algorithm for Feed Forward Networks with Binary Weights

A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, and a cooperativ...


Learning by Choice of Internal Representations: An Energy Minimization Approach

Learning by choice of internal representations (CHIR) is a learning algorithm for a multilayer neural network system, suggested by Grossman et al. [1,2] and based upon determining the internal representations of the system as well as its internal weights. In this paper, we propose an energy minimization approach whereby the internal representations (IR) as well as the weight matrix are allowed ...


Prospective Hardware Implementation of the Chir Neural Network Algorithm

I review the recently developed Choice of Internal Representations (CHIR) training algorithm for multi-layer perceptrons, with an emphasis on relevant properties for hardware implementation. A comparison to the common error back-propagation algorithm shows that there are potential advantages in realizing CHIR in hard-


Phase synchronization and chaotic dynamics in Hebbian learned artificial recurrent neural networks

All experiments and results presented in this paper have to be assessed at the crossroad of two basic lines of research: increasing the storing capacity of recurrent neural networks as much as possible, and observing and studying how this increase impacts the dynamical regimes adopted by the net in order to allow such a huge storing. Seminal observations performed by Skarda and Freeman [...


An interpretative recurrent neural network to improve pattern storing capabilities - dynamical considerations

Seminal observations performed by Skarda and Freeman [1] on the olfactory bulb of rabbits during cognitive tasks have suggested to locate the basal state of behavior in the network’s spatio-temporal dynamics. Following these neurophysiological observations, a new learning task for recurrent neural networks has been proposed by the authors in recent papers [2], [3]. This task consists in storing...



Journal:
  • Complex Systems

Volume 4, Issue -

Pages -

Publication year: 1990